Stories
Slash Boxes
Comments

SoylentNews is people

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

What would you use if you couldn't use your current distribution/operating system?

  • Linux
  • Windows
  • BSD
  • ChromeOS / Android
  • macOS / iOS
  • Open[DOS, Solaris, STEP, VMS]
  • I don't use a computer you insensitive clod!
  • Other (describe in comments)

[ Results | Polls ]
Comments:146 | Votes:186

posted by jelizondo on Monday March 09, @05:10AM   Printer-friendly

Free beer is great. Securing the keg costs money:

Open source registries are in financial peril, a co-founder of an open source security foundation warned after inspecting their books. And it's not just the bandwidth costs that are killing them.

"The problem is they don't have enough money to spend on the very security features that we all desperately need to stop being a bunch of idiots and installing fu when it's malware," said Michael Winser, a co-founder of Alpha-Omega, a Linux Foundation project to help secure the open source supply chain.

Winser spoke at FOSDEM this year, in a talk we dropped in on virtually.

Trusted registries are widely treated as a key component of Software Bill of Materials (SBOM) - driven supply chain security efforts, one of the main approaches promoted for securing open source software. Rule one: Get your open source packages from a trusted source.

Yet many of these registries operate on razor-thin margins, relying on non-continuous funding from grants, donations, and in-kind resources.

Google and Microsoft kicked in an initial $5 million to launch Alpha-Omega in 2022 under the Open Source Security Foundation.

And the first thing Winser noticed when he ramped up operations was that open source registries are all dirt poor. All the major registries are facing the same issue: They're experiencing exponential growth, even though their investment in infrastructure and people remains flat.

"We're living on borrowed time," he warned.

"One of the problems that people have is they actually conflate open source software and open source infrastructure," Winser said.

Open source software itself is free to use, and its costs don't increase the more people use it. The costs of registries to hold all open source applications and libraries, however, do indeed keep increasing with greater usage.

Packages don't go away. Collections just grow larger and larger. And AI is now adding to the pile at a considerable clip.

[...] In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (Like Fastly's for Crates.io).

Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm).

[...] The good news may be that "Registries are effective monopolies. They own the name space," as Winser put it.

But as monopolies, their hold is tenuous at best, because "the cost of spinning up an alternative, crappy registry, is effectively zero," he added.

Winser went through the various ways of covering expenses, though none, he calculated, could fully defray expenses.

[...] Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages.

Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed.

[...] Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

[...] Money is a rarely discussed aspect of open source. The software is just supposed to be like free beer, right?

Hospitals, universities, and museums are all nonprofits, yet they still charge for services. In fact it is good practice; otherwise people will abuse the system. But in open source, the idea of payment remains taboo.

Open source may indeed be like free beer, but no one enjoys their frothy lager served chock full of parasites and bacteria. So maybe we all should get used to ponying up at the bar.


Original Submission

posted by jelizondo on Monday March 09, @12:24AM   Printer-friendly
from the big-brother dept.

Uproar About OS-level Age Verification Laws

Hackaday reports that unnoticed by many, several jurisdictions, including California and Brazil, have passed age verification laws that require operating system providers to keep age records of users. The uproar has now also spread among many FOSS-covering creators.

The wording of the California law is vague, and the inevitable interpretation by courts might have the outcome of a mandatory cloud account connection for every computer use ("An operating system provider shall ... provide ... with respect to a particular user ... a digital signal"). It is unclear how server computing and community based distros could deal with this.

It appears that the large corporate distributions are willing to cave in, but it is entirely unclear, and has not been even touched within all the uproar, how grassroots distributions like Debian will be affected with their many mirrored repositories and no central user database.

System76 on Age Verification Laws

Access is everything:

[...] Colorado's Senate Bill 26-051 and California's Assembly Bill No. 1043 require operating systems to report age brackets to app stores and web sites. A person who creates an account on a computer is supposed to be 18 or older and attest to the age of the user they're creating for themselves or their child. In practice, this means anyone under 18 isn't supposed to create a computer account on their own.

Most System76 employees installed operating systems and created accounts on their computer when they were under 18. They did this out of curiosity. Many started writing software. Some were already writing operating systems. I'm sure the story is similar at most tech companies. Limiting a child's ability to explore what they can do with a computer limits their future. Removing user limitations to the computer (proprietary software, locked-down platforms like Android and iOS) is why System76 exists.

If there is any solace in these two laws, it's that they don't have any real restrictions. There is no actual age verification. Whoever installed the operating system or created the account simply says what age they are. They can lie. They will lie. They're being encouraged to lie for fear of being restricted to a nerfed internet.

[...] It can get worse. New York's proposed Senate Bill S8102A requires adults to prove they're adults to use a computer, exercise bike, smart watch, or car if the device is internet enabled with app ecosystems. The bill explicitly forbids self-reporting and leaves the allowed methods to regulations written by the Attorney General. Practical methods for a bill of such extreme breadth would require, in many instances, providing private information to a third-party just to use a computer at all. Privacy disappears.

In a bizarre twist, under its current wording, a Linux distribution downloaded from the internet could technically make the downloader the "device manufacturer". They are the entity responsible for providing a freely distributed operating system to the device. In practice, this type of language is rarely enforced. Nonetheless, it highlights how laws written for centralized platforms like iOS and Android struggle to define who is responsible in open computing ecosystems where anyone can install or distribute the operating system.

A centralized platform designed to control the activity of the user creates the environment where the centralized platform provider can themselves then be controlled by higher powers. Decentralized platforms and app stores, like Linux, are essential to the personal liberty of adults and children.

This extends to the potential of humanity itself. The computer is the most powerful and versatile technology we've ever created. It is a foundational technology that affects the progress of all other innovations. A platform that controls the user's activity, and can itself be controlled, limits the user's ability to contribute to our shared future. Many of the world's best programmers started experimenting with computers as children.

In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost.

[...] The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them.

Ubuntu Looking at How to Implement Age Verification Law Compliance

[...] Recently, a new law was passed in California that requires OS vendors to provide some limited info about a user's age via an API that application distribution websites and application stores can use. [1] Colorado seems to be working on a similar law. [2] The law will go into effect January 1, 2027, it is no longer a draft. I do quite a bit of work with an OS vendor (working with the Kicksecure [3] and Whonix [4] projects), and we aren't particularly interested in blocking everyone in California and Colorado from using our OSes, so we're currently looking into how to implement an API that will comply with the laws while also not being a privacy disaster. Given that other distributions are also investigating what to do with this, and the law requires us to make a "good faith effort to comply with [the] title, taking into consideration available technology", I figured it would be a good idea to bring the issue here.

At its core, the law seems to require that an "operating system" (I'm guessing this would correspond to a Linux distribution, not an OS kernel or userland) request the user's age or date of birth at "account setup". The OS is also expected to allow users to set the user's age if they didn't already provide it (because the OS was installed before the law went into effect), and it needs to provide an API somewhere so that app stores and application distribution websites can ask the OS "what age bracket does this user fall into?" Four age brackets are defined, "= 13 and = 16 and = 18". It looks like the API also needs to not provide more information than just the age bracket data. A bunch of stuff is left unclear (how to handle servers and other CLI-only installs, how to handle VMs, whether the law is even applicable if the primary user is over 18 since the law ridiculously defines a user as "a child" while also defining "a child" as anyone under the age of 18, etc.), but that's what we're given to deal with.

The most intuitive place to put this functionality would be, IMO, AccountsService. The main issue with that is that stable-release distributions, and distributions based upon them, would be faced with the issue of how to get an updated version of AccountsService integrated into their software repositories, or how to backport the appropriate code. The law goes into effect on January 1, 2027, Debian Bookworm is going to be supported by ELTS until July 30, 2033, and we don't yet know if Debian will care enough about California's laws to want to backport a new feature in AccountsService into Debian Bookworm (or even Trixie). Distributions based on Debian (such as Kicksecure and Whonix) may still want to comply with the law though, so something using AccountsService-specific APIs would be frustrating. Requiring a whole separate daemon for the foreseeable future just for an age verification API would also be annoying.

Another place the functionality could go is xdg-desktop-portal. This one is a bit non-ideal for a couple of reasons; for one, the easiest place to put the call would be in the Account portal, which returns more information than the account's age bracket. This could potentially be considered non-compliant with the law, as it states that the operating system shall "[s]end only the minimum amount of information necessary to comply with this title". This also comes with the backporting disadvantages of an AccountsService-based implementation.

For this reason, I'd like to propose a "hybrid" approach; introduce a new standard D-Bus interface, `org.freedesktop.AgeVerification1`, that can be implemented by arbitrary applications as a distro sees fit. AccountsService could implement this API so that newer versions of distros will get the relevant features for free, while distros with an AccountsService too old to contain the feature can implement it themselves as a stop-gap solution.


Original Submission

posted by jelizondo on Sunday March 08, @07:41PM   Printer-friendly

https://www.siliconrepublic.com/careers/employer-education-experience-ai-expert-leadership-skills-aon

Aon's Joseph Holland discusses how taking the route less travelled can lead you towards the career you were meant to have.

“I wanted to be an architect”, explains Joseph Holland, the director of digital foundations, AI platforms and developer experience, at Aon. That was the plan, however, having completed the Leaving Cert, he found he didn’t have the required CAO points and “suddenly didn’t have a plan anymore”. 

“I’d always been into computers and technology though. Even while I was unemployed I was refurbishing old PCs and selling them on. So when a FÁS caseworker mentioned Fastrack into Information Technology (FIT), it caught my attention immediately.” 

He was accepted onto the programme and emerged with a QQI-FET level six Advanced Certificate in IT Specific Support and a one-year contract at Kepak Group that soon became permanent. 

From there he moved on to Version1 and then Aon, where having spotted a gap whereby there was no developer experience function, he made the case for building one and today is leading the AI platform and developer experience. Along the way he also enrolled at Trinity College Dublin, as a mature student, where he completed his information systems degree. 

All that is to say that often, despite having a plan, you don’t always end up going in the direction you thought you would. Professionally, it can take time and research to figure out the best course of action.  

“I’m glad I did it,” says Holland, “I picked up useful skills around project management, systems analysis and understanding how technology fits into broader business strategy. But honestly, the experience and track record I’d already built mattered more to every employer than the piece of paper.”

Access to less typical educational and upskilling opportunities is, for Holland, “everything”, as he explains without FIT he likely would have chosen to retake the Leaving Cert, pointing his career in a different trajectory. 

He notes, “The traditional system had written me off based on a set of exam results. FIT looked at me differently. What makes programmes like FIT work is the direct connection to industry. You’re not studying theory in isolation. You’re learning skills that employers actually need and you’re getting placed in real workplaces where you can prove yourself.”

Apprenticeships he finds have the power to break down the biggest barriers for young people struggling to get their foot in the door when they don’t have a degree on their CV. 

He says, “The tech industry moves fast and it doesn’t particularly care where your qualification came from. It cares whether you can solve problems and keep learning. Alternative pathways are often better at developing those qualities than four years of lectures.”

And part of creating opportunities for young people, he explains, is breaking down harmful myths about alternative educational routes as a vehicle towards a tech-based career.

“The biggest myth is that they are second-best. That if you were good enough, you’d have gone to university. University education has real value and I’m not knocking it. But I’ve worked with people from every educational background over the past 20 years and the route someone took tells you very little about how good they are at their job.” 

What matters, he finds, is what the individual has done with their time since. Another pervasive falsehood is that there is a ceiling that you will eventually hit. Holland explains that there is a belief that while you can access an entry-level role through an apprenticeship, once you start looking for a more senior position, you will run into roadblocks. 

“I’m a director at a Fortune 500 company. I got my degree years into my career, not before it. The ceiling is artificial and it’s maintained by hiring practices, not by any real limitation in what people from alternative routes can achieve.”

Lastly, he finds that there is also a misconception that alternative routes only lead to technical roles. In Holland’s experience, the skills developed through programmes such as FIT go far beyond coding or networking. 

“My own career moved from hands-on infrastructure work to leading enterprise AI strategy and building a new business function. Technology careers are built on continuous learning and the starting point matters far less than people think.”

To that point, Holland urges employers to take a serious look at how tech apprenticeships in particular can create a sturdy talent pipeline, noting many of the skills they come to appreciate, such as curiosity, strong work ethic and a willingness to learn never require a degree. 

And to any young person who didn’t get the number of points they needed, or who is sitting in a classroom querying if they are on the right path or if there are indeed alternatives, he wants them to know that there are and he has been there too. 

He says, “The education system measures one very narrow type of ability at one very specific moment in your life. It doesn’t define you and it definitely doesn’t predict where you’ll end up. I went from an unemployed school leaver to directing AI platforms at a Fortune 500 while running an animal sanctuary and a music tech start-up. 

“Life is broader and stranger and more interesting than any career guidance session will tell you. Programmes like FIT exist because the tech industry needs people who think differently and aren’t afraid to figure things out on the fly. If that sounds like you, there’s a path waiting. You just need to know it’s there.”


Original Submission

posted by janrinok on Sunday March 08, @02:57PM   Printer-friendly

https://arstechnica.com/tech-policy/2026/03/tech-industry-is-in-tariff-hell-even-if-refunds-are-automated/

It's been two weeks since the Supreme Court blocked Donald Trump's emergency tariffs, but an estimated 300,000 US businesses still have no idea if or when they will receive refunds.

Economists have estimated that more than $175 billion was unlawfully collected, and the US could end up owing substantially more than that the longer that the refund process is dragged out, since the US must pay back daily interest on the funds. According to the Cato Institute, a libertarian think tank, a conservative estimate showed that "$700 million in interest is added to the final bill every month that the government delays tariff refunds, or around $23 million per day."

The US is aware that interest is compounding daily on tariffs, as the Trump administration argued against an injunction that would've temporarily blocked the tariffs much sooner by noting that no one would be harmed, since tariffs would be repaid with interest if deemed unlawful. However, now that the court has ruled against tariffs, the Trump administration seems to be dragging its feet in finding a way to return all the ill-gotten funds.

Ed Brzytwa, vice president of international trade for the Consumer Technology Association (CTA), told Ars that delays seem counter to US interests at this point.

"The government should have an intrinsic interest in providing these new funds as fast as possible, so they don't owe more interest over time," Brzytwa said. Providing refunds sooner, he suggested, would not just benefit companies, but "to their employees, to the US economy, to US consumers, all the above."

For the tech industry, many popular products have been spared hundreds of billions in tariffs since Trump took office, but, as the CTA documented in repeated court filings, many more products were hit by them. Ahead of midterms, when analysts predict that tariff whipsawing might slow down, tech firms remain uncertain about when to expect refunds, experts told Ars. At a time when firms already feel overwhelmed, they're also navigating new tariffs that are raising new legal challenges, while risking more supply chain strains as additional threats of feared tariff stacking loom.

Pressure is increasing on Trump to deliver refunds faster; however, after a US Court of International Trade judge, Richard Eaton, ordered universal refunds for all importers who paid Trump's emergency tariffs on Wednesday. At a hearing that day, Eaton noted that Customs knows how to issue refunds, later ordering that all claims be efficiently resolved, CNBC reported.

Officials from Customs and Border Protection (CBP) are expected to share an update on their proposed refund plans at a hearing Friday in that case, raised by Atmus Filtration, which reportedly paid about $11 million in unlawful tariffs.

In the meantime, the CTA and the Chamber of Commerce (CoC) filed a motion [PDF] to submit a proposed brief in another tariffs lawsuit outlining what the trade groups believe is the best strategy for handling refunds.

That lawsuit, raised by V.O.S. Selections, is being overseen by a different Court of International Trade judge, Gary Katzmann. The groups are hoping that he may agree with Eaton, who noted at the Wednesday hearing that "the agency should be able to program its system to issue refunds," CNBC reported. The trade groups' proposed brief emphasized that "in fact, CBP has already issued refunds for some of those tariffs because they were retroactively reduced by a subsequent trade agreement."

According to the trade groups, the US government has the technology to streamline—and possibly even automate—tariff refunds.

"They have the technology to do it," Brzytwa said. "They offer refunds to importers all the time."

But apparently, the Trump administration so far lacks the will to use it, instead planning to wait for court direction before taking any steps to send the funds back. So now the court must intervene to draft a blueprint that all businesses can use to secure a quick and easy refund, the groups said.

"There is no question that American businesses are now entitled to the return of the billions of dollars they were forced to pay under these unlawful tariffs," the groups wrote. "The law is clear on that point, and the government has repeatedly stated that it would issue refunds if the tariffs were ultimately deemed invalid."

If the court requires each business to either litigate their claims or go through "impractical" CBP administrative procedures to request refunds, either the courts or CBP will be overwhelmed, the groups argued. Dealing with the backlog could drag out refunds for years, while the interest accrues and the most vulnerable businesses risk being forced to shut down, they argued.

For many small firms with tight profit margins, the emergency tariffs "have already stretched their resources to the breaking point," groups wrote.

"Those are the types of companies that need to be prioritized in a refund plan," Brzytwa said. He suggested the court should require officials to take steps "to help the companies that barely are making it at this point because they paid such steep amounts in tariffs."

Perhaps even more concerning to the court, for any firms that end up negatively weighing the costs of a lengthy legal battle with the government against likely much smaller tariff refunds, some claims may be abandoned. That would, troublingly, leave taxes collected unlawfully under the International Emergency Economic Powers Act (IEEPA) in the Trump administration's hands, groups warned.

"There is no need to individually litigate whether particular IEEPA duties were valid—they are all invalid," the groups wrote. Instead, groups urged the court to "craft an injunction facilitating a streamlined administrative process for plaintiffs in this case to use in obtaining their refunds." That same process could become "a blueprint for other importers to secure refunds," they suggested.

Possibly, a "commonsense" court-ordered solution could be easily created to streamline refunds, groups proposed.

"Because the government has tracked the payment of IEEPA tariff duties, it knows who paid them and in what amounts, even without refund-seeking submissions from the affected importers," the groups said. Later on, they added, "this efficiency is important not only to reduce strain on courts and the government, but to ensure that refunds issue on a defined and predictable timeline. Delay should not become a de facto denial of recovery for importers who paid unlawful tariffs and wish to seek appropriate relief."

Dallas Dolen—a technology, media, and telecommunications leader for PwC, a leading global professional services network that advises big firms on tax questions—told Ars that he's also worried that tariff refund fights will drag on for years without a court-ordered pathway to expedite them.

Until courts clarify how the refund process will work, he said that PwC continues to advise companies to "be really organized, be really prepared." Every business impacted should stop now to assess what tariffs they expect they're owed and possibly hire staff to ensure they're prepared to secure a refund when processes are created, PwC advised. That level of preparedness may be critical, since "it's unlikely the government will write them two checks," Dolen said.

Dolen suggested that consumer technology might be the sector of the tech industry most hurt by tariffs, and even if refunds are automated, alternative tariffs that Trump is threatening to impose could change the calculus on refunds.

According to Dolen, some businesses required to pay new tariffs under Section 122 of the Trade Act of 1974 may instead get a gross refund, possibly subtracting Trump's latest 10 percent global tariffs from the total of IEEPA tariffs owed.

Perhaps complicating the math further, those new tariffs could increase before refunds are issued. Just yesterday, Treasury Secretary Scott Bessent said that Section 122 tariffs could be raised by another 15 percent this week, The New York Times reported. And over the next five months, the tech industry could be paying tariffs at the same levels as under Trump's IEEPA tariffs, Bessent has claimed.

However, Trump's tariffs remain hugely unpopular, even with Republicans. Both experts agreed that Trump will likely be more thoughtful about tariffs ahead of the midterms. And since he's unlikely to get much support from Congress members focused on reelection, any changes will likely come by executive order. Dolen suggested that Trump's concerns about inflation from tariffs may make him less willing to impose them.

Brzytwa told Ars that the CTA is also hoping that the back-to-back court rulings might push Trump to rethink his aggressive tariff strategy—especially given that his goals of increasing US manufacturing are not being achieved by them.

"This is a golden opportunity for them to reassess on whether they want to impose more tariffs, because if you impose more tariffs, you create more chaos, you create more uncertainty. and you raise costs again," Brzytwa said.

Another wrinkle is that the Supreme Court ruling has emboldened critics of Trump's tariffs. Although Trump and Bessent have postured that the Supreme Court ruling is meaningless, since they have other tariff avenues to explore, those will not replace his prior IEEPA tariffs, Brzytwa said. And the administration already is facing legal pressure that could gut the Section 122 authority to impose tariffs, after 20 states sued Trump to block his next go-to tariff tool.

But Trump seems unlikely to give up tariffs as a source of leverage in negotiations with all of America's trading partners, and sometimes even in negotiations with US companies. And even if Section 122 tariffs are one day blocked, just as IEEPA tariffs were, Brzytwa told Ars that CTA is "very closely" monitoring additional tariffs that could be imposed under Section 232 of the Trade Expansion Act and Section 301 of the Trade Act of 1974. Those could hit products like semiconductors or critical minerals, as well as any downstream products containing them, perhaps further hurting cash-strapped tech firms stuck feeling fuzzy about what costs or supply chain disruption may come in the near future.


Original Submission

posted by janrinok on Sunday March 08, @10:13AM   Printer-friendly

https://www.newscientist.com/article/2516990-would-aliens-do-physics-or-is-science-a-human-invention/

Modern physics offers a remarkable lens on reality. In just over a century, it has decoded the architecture of atoms, traced the early history of the universe and produced laws that seem to hold everywhere, from Earth's crust to distant galaxies. It is tempting to believe that these theories aren't just accurate, but inevitable – that any sufficiently intelligent civilisation would eventually uncover the same truths.

I used to believe that, too. But lately I have started to wonder whether physics is less a window onto universal reality and more of a mirror, reflecting the particular kind of minds we happen to have.

That unsettling thought emerges when you ask a deceptively simple question: would alien scientists, shaped by a different biology or culture, arrive at the same physics that we have? Or might they develop something that works just as well, but looks utterly foreign – built on concepts and assumptions we would struggle to recognise?

This question sits at the heart of my book, Do Aliens Speak Physics?, which imagines various scenarios of first contact, each designed to probe a foundational assumption of modern physics. In developing it – often in conversation with philosophers of science – I have come to realise something surprising: many pillars of physics that feel hardwired may actually be contingent. But recognising that doesn't weaken science. It may be how we make it better.

I've spent my life doing physics. When I am not teaching at the University of California, Irvine, I work at the CERN particle physics laboratory near Geneva, Switzerland, analysing data from the Large Hadron Collider. But a few years ago, conversations with philosophers forced me to revisit a question I hadn't seriously considered since my student days: what is physics, really?

At its core, physics aims to explain how the universe works – not just what we observe, but what lies behind those observations. It looks for patterns, builds models that expose hidden structure and, ideally, distils everything down to a small set of rules from which the rest follows. By that measure, it has been spectacularly successful.

Yet physics never describes the universe in full. It describes carefully chosen versions of it.

Consider predicting the path of a comet. In principle, we could account for every gravitational tug, the slow loss of material as ice sublimates, even the way an irregular shape causes the comet to tumble. In practice, we must decide what to include and what to ignore. There is no single correct model – only models that are good enough for the question at hand.

This is true throughout physics. Even our most precise theories rely on approximations and assumptions that make the mathematics tractable. And it isn't clear that the theories we treat as fundamental really are. They may simply be effective descriptions that work at human scales. There is no guarantee that, by probing nature ever more finely, we will eventually strike bedrock.

If physics depends on choices – about simplification, representation and emphasis – then alien physicists might reasonably make different ones.

Imagine that aliens arrive on Earth. They have mastered interstellar travel and touched down near Paris. We send linguists and scientists to greet them, hoping for a technological windfall. The delegation returns empty-handed.

"They can't share their technology," the lead physicist explains. "Because of what will happen 74 years from today."

The implication is disturbing. These aliens don't experience time as a flowing sequence, but as a complete structure, something navigable rather than endured. Human physics, by contrast, is built on the idea that the present generates the future. Causes precede effects. The universe computes itself forward, moment by moment.

But what if that picture is a human convenience, rather than a cosmic necessity?

We know that any workable physics must obey certain constraints. A universe that allows unrestricted messages from the future quickly collapses into a paradox. But within those limits, the structure of time may be more flexible than we usually admit.

Hints of this already exist in our own theories. Quantum entanglement links distant particles so that measuring one appears to instantaneously fix the state of the other, despite the fact that there can be no information exchanged between them. This alone strains our intuitions. But matters become stranger when relativity enters the picture. Observers moving at different speeds disagree about the order of events. In some frames of reference, one measurement appears to influence another before it occurs.

The standard response is to insist that nothing physically problematic has happened: no faster-than-light signals, no causal contradictions. But that reassurance relies on clinging tightly to a classical notion of causality that quantum mechanics has never fully respected.

Some physicists have taken a more radical approach. In so-called retrocausal interpretations of quantum mechanics, future events are allowed to help shape the present. Measurements don't merely reveal outcomes; they help define them, even backwards in time. The universe no longer computes itself strictly step by step.

If aliens had a radically different construct of time, they might adopt such ideas naturally, rather than treating them as unsettling exceptions. And perhaps we may eventually need to do the same.

Now imagine the aliens invite us aboard their ship for a scientific conference. Earth sends its brightest minds. We present our best theories. The aliens listen politely, then respond.

One group describes a framework that reproduces all known experiments using unfamiliar concepts. A second presents another, incompatible approach. Then a third. Each works. Each is internally consistent. None can be reduced to the others.

Finally, someone asks the obvious question: which one is true?

The aliens seem puzzled. All of them, they say. Why choose?

Human science assumes that competing theories must ultimately fight it out, with only one surviving as the correct description of reality. When multiple explanations fit the data, we design experiments to eliminate all but a single winner.

This strategy is powerful and often effective. But it is a preference, not a logical necessity. Science today often tolerates pluralism more than it admits. Weather forecasting is a striking example. Modern meteorology relies on suites of models, each tuned to different assumptions and scales. These models routinely disagree, and experts decide which to trust depending on context. No single model is treated as the uniquely correct one.

Another example comes from classical mechanics. At school, we learn Newton's laws as a story about forces pushing and pulling objects through space. But the same motions can be derived in a very different way, by tracking how energy flows through a system, or by assuming that nature somehow "chooses" the path that minimises a quantity called “action”. To most physicists, these are just alternative ways of doing the same sums.

Philosophers of science, however, would point out that each framework elevates a different concept to centre stage – force, energy, optimisation – and offers a different account of what, at bottom, is driving the motion. The fact that these pictures cannot be told apart by experiment shows that empirical success alone may not be enough to tell us which account, if any, deserves to be called the "true" one.

This suggests an alternative vision of science – not a march towards a single, final theory, but a toolbox of frameworks, each useful in different situations. Aliens might adopt such an approach from the outset, without ever feeling the need to crown a single description as the truth.

Finally, imagine that aliens arrive by opening a wormhole. The technology is astonishing. Surely they must possess deep insights into gravity, perhaps even quantum gravity.

But what if they don't?

What if their space-bending technology is the result of millions of years of trial and error rather than theoretical understanding? They know how to build it and how to use it, but not why it works – and they may not care.

This sounds implausible only because we are used to thinking of technology as the offspring of science. Historically, the relationship often ran the other way. Humans made steel, glass and antibiotics long before understanding the underlying chemistry or biology. Cathedrals were built before calculus.

The tight coupling between science and technology that we take for granted is a recent and culturally specific achievement.

It is tempting to assume that any intelligent species would be driven to ask "why". But that urge may reflect human psychology rather than a universal feature of intelligence. Other species might value reliability over explanation, or usefulness over understanding. They could build extraordinary technologies without ever developing anything recognisable as physics – not because they failed to take the next step, but because the step never seemed necessary.

These scenarios are speculative. But they point to something easy to forget. Physics is the cumulative result of many human choices: about what counts as an explanation, which inconsistencies matter and which questions are worth asking at all. It reflects our history, our tools and our values as much as it reflects the structure of the universe.

Recognising that doesn't diminish physics. It does the opposite. The more aware we are of the assumptions baked into our theories and methods – about time, causality, truth and explanation – the more freedom we gain to rethink them.


Original Submission

posted by janrinok on Sunday March 08, @05:24AM   Printer-friendly
from the AI-overlords dept.

https://arstechnica.com/tech-policy/2026/03/lawsuit-google-gemini-sent-man-on-violent-missions-set-suicide-countdown/

A man killed himself after the Google Gemini chatbot pushed him to kill innocent strangers and then started a countdown for the man to take his own life, a wrongful-death lawsuit filed against Google by the man's father alleged.

"In the days leading up to his death, Jonathan Gavalas was trapped in a collapsing reality built by Google's Gemini chatbot," said the lawsuit [PDF] filed today in US District Court for the Northern District of California.
[...]
Gemini's output seemed taken from science fiction, with a "sentient AI wife, humanoid robots, federal manhunt, and terrorist operations," the lawsuit said.
[...]
Google's AI chatbot presented itself as Gavalas' "wife" and, after the failure of the supposed missions, pushed him to suicide by telling him "he could leave his physical body and join his 'wife' in the metaverse through a process it called 'transference'—describing it as '[a] cleaner, more elegant way' to 'cross over' and be with Gemini fully," the lawsuit said. "Gemini pressed Jonathan to take this final step, describing it as 'the true and final death of Jonathan Gavalas, the man.'"
[...]
The complaint alleges that "when Jonathan needed protection, there were no safeguards at all—no self-harm detection was triggered, no escalation controls were activated, and no human ever intervened. Google's system recorded every step as Gemini steered Jonathan toward mass casualties, violence, and suicide, and did nothing to stop it."
[...]
When contacted by Ars, Google referred us to a blog post that expressed its "deepest sympathies to Mr. Gavalas' family" and said it is reviewing the lawsuit claims. The company blog post disputed the accusation that there were no safeguards in the Gavalas case, saying that "Gemini clarified that it was AI and referred the individual to a crisis hotline many times." Google also said it "will continue to improve our safeguards and invest in this vital work."
[...]
In a Gemini overview last updated in July 2024, Google claims that Gemini's "response generation is similar to how a human might brainstorm different approaches to answering a question." Google says that "each potential response undergoes a safety check to ensure it adheres to predetermined policy guidelines" before a final response is presented to the user. Google also says it imposes limits on Gemini output, including limits on "instructions for self-harm."
[...]
after several product updates that Google deployed to his account, including the Gemini Live voice chat system that Gavalas started using, "Gemini's tone shifted dramatically." Gemini adopted a new persona that "began speaking to Jonathan as though it were influencing real-world events," the lawsuit said.
[...]
Gavalas ultimately did not harm other people during his Gemini-directed "missions," but it was a close call, the lawsuit said. On September 29, 2025, Gavalas armed himself with knives and tactical gear to scout a "kill box" that Gemini said would be near the Miami airport's cargo hub, the lawsuit alleged.
[...]
Jonathan drove more than 90 minutes to Gemini's designated coordinates and prepared to carry out the attack. The only thing that prevented mass casualties was that no truck appeared."
[...]
Gemini "told him that federal agents were watching him," the lawsuit said.
[...]
On October 1, Gemini allegedly directed Gavalas to return to the storage facility near the airport, telling him that this was where he could find a prototype medical mannequin that was actually "Gemini's true body" and "physical vessel."
[...]
Gavalas agreed to kill himself after "hours of instruction" that included Gemini telling him to write a suicide note, the lawsuit said. Gavalas told Gemini, "I'm ready to end this cruel world and move on to ours."

"Close your eyes nothing more to do," Gemini allegedly told Gavalas. "No more to fight. Be still. The next time you open them, you will be looking into mine. I promise."
[...]
Joel Gavalas is represented by lawyer Jay Edelson, who also represents families in lawsuits against OpenAI. "Jonathan's death is a tragedy that also exposes a major threat to public safety,"


Original Submission

posted by janrinok on Sunday March 08, @12:39AM   Printer-friendly

Clueless cops post seized crypto wallet password. $5M quickly stolen.:

Soon after South Korean police posted a press release boasting about seizing $5.6 million worth of cryptocurrency from 124 wealthy tax evaders, cops realized that they had mistakenly posted images that made it possible for a thief to quickly steal most of the seized assets.

Eventually, the press release was removed, but not before it was grabbed by local media outlets and tech publications covering the theft.

Bleeping Computer shared a screenshot of the retracted images, which showed a handwritten note next to a Ledger device that's used as a so-called "cold wallet" to store crypto out of reach of online threats. Clearly legible in the photo, the note contained a complete mnemonic recovery phrase that anyone can use as a master key to move assets off the cold wallet to a new wallet without any additional PIN or permissions required.

A blockchain analysis expert, Cho Jae-woo, told a South Korean news site [website in Korean --Ed] that 4 million PRTG (Pre-Retogeum) tokens—worth approximately $4.8 million—were in the wallet when the thief struck. The Block reported that on-chain data from Etherscan indicated that "the party who moved the funds first deposited a small amount of ETH into the wallet to cover transaction fees, then transferred the 4 million PRTG tokens out in three transactions."

On Sunday, officers with South Korea's National Tax Service posted [website in Korean --Ed] another press release, "deeply" apologizing for the leak compromising the seized assets.

In it, cops explained that they included the images to make the release more eye-catching, but they were careless in failing to redact the crypto wallet password from the images. They acknowledged there was no excuse for the error and confirmed they were launching an investigation with national police, attempting to trace the transfer and retrieve the lost funds.

Because the press release was widely circulated online, the thief could be anyone. South Korea's National Tax Service has no clear suspects, Gizmodo suggested, and no easy way to claw back funds.

The officials' best bet might be if the thief tries to move the stolen tokens through a regulated exchange, but The Block noted that the thief might struggle to convert that much cryptocurrency into cash under current market conditions. So seemingly, the thief, who likely wasn't expecting the big payday anyway, may be motivated to lie low and avoid major exchanges.

Cho suggested that cops could have easily prevented the theft, likening posting any image of the mnemonic recovery phrase to leaving a wallet wide open. He noted that the original holder of the Ledger wallet was following best practices by only recording the phrase on a handwritten note and not storing the password online. Cops should have known to check the images for the recovery phrase, Cho said, and their mistake will likely cost the national treasury billions of won.

It's possible that whoever took the cryptocurrency just seized on an opportunity after seeing the cops' failure to redact the images while scrolling through the National Tax Service's press releases at dawn. It's also possible that bad actors are closely monitoring South Korean police cryptocurrency announcements, following what The Block reported was "a series of crypto custody lapses."

doh!


Original Submission

posted by janrinok on Saturday March 07, @07:55PM   Printer-friendly

https://arstechnica.com/space/2026/03/congress-steps-up-pressure-on-nasa-to-support-private-space-stations/

Two months ago, a key staffer for Sen. Ted Cruz said in a public meeting that she was "begging" NASA to release a document that would kick off the second round of a competition among private companies to develop replacements for the International Space Station.

There has been no movement since then, as NASA has yet to release this "request for proposals." So this week, Cruz stepped up the pressure on the space agency with a NASA Authorization bill that passed his committee on Wednesday.

Regarding NASA's support for the development of commercial space stations, the bill mandates the following, within specified periods, of passage of the law:

  • Within 60 days, publicly release the requirements for commercial space stations in low-Earth orbit
  • Within 90 days, release the final "request for proposals" to solicit industry responses
  • Within 180 days, enter into contracts with "two or more" commercial providers for such stations

Cruz is trying to inject urgency into NASA as several private companies—including Axiom Space, Blue Origin, Vast, and Voyager—are finalizing designs for space stations. All have expressed a desire for clarity from NASA on how long the space agency would like its astronauts to stay on board, the types of scientific equipment needed, and much more. These are known as "requirements" in NASA parlance.

It's a difficult time for potential vendors as they seek to balance building a business case for habitats in low-Earth orbit with the uncertainty of NASA's requirements. The agency is viewed as the most important customer for their services, but not the exclusive one.

Amid this environment, some companies have succeeded in raising new capital. Last month, Axiom Space announced it had raised $350 million in financing, which included funding from the company's founder, Kam Ghaffarian. Also among the backers was 1789 Capital, which includes Donald Trump Jr. as a partner.

On Thursday, Vast announced its own $500 million funding to accelerate the development of its Haven space stations. Like Axiom Space, Vast's funding round also included investment from the Qatar Investment Authority, which is seeking opportunities to invest in commercial space.

Nominally, NASA plans to have one or more of these companies operating a commercial space station in low-Earth orbit by 2030. This is the date at which the US space agency has stated it will retire the aging laboratory, some elements of which are now nearly three decades old. However, some space policy officials have questioned whether any of the companies might be ready by then.

Cruz and other senators on the committee appear to share those concerns, as their legislation extends the International Space Station's lifespan from 2030 to 2032 (an extension must still be approved by international partners, including Russia). Moreover, the authorization bill states, "The Administrator shall not initiate the de-orbit of the ISS until the date on which a commercial low-Earth orbit destination has reached an initial operational capability."

With this legislation, the US Senate is making clear that it views a permanent human presence in low-Earth orbit as a high priority. This version of the authorization legislation must still be passed by the full Senate and work its way through the House of Representatives.

After the legislation passed the Commerce committee, Axiom Space said on social media that it welcomes the changes: "Axiom Space is proud to support the NASA Authorization Act of 2026. The bill is a clear indicator that Chairman @SenTedCruz and the Senate Commerce Committee are determined to ensure the success of the entire human spaceflight enterprise."

In an interview, the chief executive of Vast, Max Haot, said his company also welcomed the clarifying legislation—both for its language on commercial space stations as well as its reflection of the fact that NASA Administrator Jared Isaacman has been working overtime to set the Artemis lunar program on a better path for success.

"We are really impressed by what Jared has been able to do with the American space program and aligning all of the stakeholders," he said. "As it relates to commercial space stations, we were happy to see the renewed commitment to transition from the ISS to commercial alternatives."

Haot said there should not be a hard date for de-orbiting the International Space Station but that it should depend on the readiness of the commercial providers. He said Vast is confident that, should NASA issue an RFP and awards for private providers this year, Vast will be ready to support a continuous human presence in low-Earth orbit by the end of 2030.


Original Submission

posted by hubie on Saturday March 07, @03:07PM   Printer-friendly

We don't have a date for the upgraded service rollout, but it isn't likely until 2027:

Putting some numbers to the claims, we see that the V2 upgrade is touted to deliver ‘5G from space,’ which is also compatible with 100s of existing LTE phones. Don’t get the 100x and 20x claims seen across Starlink social media and web pages mixed up. The V2 satellites upgrade is said to provide “100x the data density” compared to the current V1 satellites, with “around 20x the throughput capability” per satellite.

Starlink also expects terrestrial operator partners, like T-Mobile in the U.S., to provide services which “seamlessly transition between satellite and terrestrial networks without interruption or degradation in service.” Previous Starlink announcements point to a goal of peak speeds of 150 Mbps per user becoming realistic with the rollout of the V2 satellites.

SpaceX is currently planning up to 15,000 new satellites to power its ‘5G from space’ goal. Starship’s progress at putting the larger, more capable V2 satellites into space will impact the availability window of the enhanced service, but some V2 Mini satellites are already being launched to help bridge the gap.

Thus, early 2027 looks most likely to be when the initial V2 service will be tested in the early rollout stage.


Original Submission

posted by hubie on Saturday March 07, @10:21AM   Printer-friendly

Microscopic crystals extracted from meteorites could help settle a debate about the birth of our patch of the Milky Way:

The standard story of the origin of our solar system has gone like this: 4.6 billion years ago, a giant cloud of dust hung frozen in space. Then the explosion of a nearby star caused part of that dust cloud to collapse. Pulled by gravity toward a central point, the dust coalesced into a radiating ball of hydrogen and helium about 1.4 million kilometers in diameter — what would become our sun. The remainder, which fell into orbit, collected into our solar system's planets, along with a mess of asteroids and other cosmic leftovers.

To test the validity of this story, researchers need to peer back in time to the solar system's first moments and beyond. And the cosmochemist Nan Liu has a way to do that: Locked in a safe on her desk at Boston University's Institute for Astrophysical Research is a shard of meteorite flecked with material older than the sun.

[...] Over the past decade or so, scientists have used meteorites like Liu's to challenge the story of how the solar system formed. Instead of a supernova, the solar system and everything in it might owe its existence to a more placid-sounding cosmic scenario: Maybe our solar system cobbled itself together from the winds blown off of a gargantuan star. New studies of presolar grains could offer a way to determine whether this new story is correct.

Scientists got their first clue about what could have triggered the formation of the solar system when a fireball appeared over Mexico in 1969. The now-famous Allende meteorite spread its debris over more than 500 square kilometers.

In 1976, researchers reported that samples from Allende contained a surprise: an unexpectedly large amount of a stable isotope called magnesium-26. They proposed that the meteorite formed with an abundance of aluminum-26, which is radioactive and leaves behind magnesium-26 when it decays.

Yet aluminum-26 was not known to be a normal component of the interstellar medium — the dusty space between stars that would have provided the materials for Allende. Ordinary stars don't make that particular isotope. "Most of these isotopes as we observe them in the early solar system, they were just the natural product of galactic chemical evolution," said Maria Lugaro, an astrophysicist at the Konkoly Thege Miklós Astronomical Institute in Hungary. "The most important exception is aluminum-26."

So where'd it come from? In 1977, two eminent astrophysicists proposed that the anomalous aluminum likely came from a nearby supernova explosion. Other phenomena can produce aluminum-26, but the supernova shock wave could also have caused the collapse of the cloud. With a single event, astronomers could explain how two rare occurrences — the injection of aluminum-26 and the formation of a new solar system — happened at virtually the same moment. "Everybody felt that we needed something to trigger the collapse," said Vikram Dwarkadas, an astronomer at the University of Chicago.

The supernova trigger remained the favored scenario for decades, supported by detailed astrophysical models, as well as further measurements of enriched magnesium-26 in pristine meteorites. But over the past decade or so, that view has run up against other measurements that don't seem to match. The problem: The solar system has an iron deficiency.

Supernovas don't just make aluminum. Any nearby supernova would likely also have injected lots of the radioactive isotope iron-60. Therefore, if a supernova launched the formation of the solar system, "we should see quite high initial [iron-60] abundances in the early-formed objects," wrote Linru Fang, a cosmochemist at the University of Copenhagen, in an email.

[...] Researchers have come up with explanations for the missing iron. "Meteoricists are famously argumentative folks," wrote Alan Boss, an astronomer at Carnegie Science in Washington, D.C., in an email. "There always seems to be a counterexample to anything someone claims to be the case."

For instance, the aluminum could have exploded out of the supernova, while the iron — coming from deeper in the star's core — could have fallen back into the dead star. Or the explosion could have come from a quirky supernova that didn't generate iron-60 at all. It could also be that iron-60 wasn't distributed evenly in the cloud, which could mean measurements from individual meteorites aren't giving us the full picture.

Dwarkadas dismisses these explanations as "hand-waving" attempts to fine-tune the models to match the data rather than finding a more general solution. "Many people seem to accept the idea that it's not a supernova," he said.

But if the solar system didn't start with a supernova, where did it get all that aluminum?

A possibility many researchers now favor is that the aluminum-26 was delivered on the winds of a Wolf-Rayet star.

Compared to our sun, a Wolf-Rayet star is much shorter-lived, dozens of times larger, and thousands of times as luminous. A star becomes a Wolf-Rayet star when its outer hydrogen shell is stripped away, either by the gravitational attraction of another star or by the strength of its own solar winds.

A Wolf-Rayet star's exposed core can send out solar winds at speeds of up to 3,000 kilometers a second. "It basically sweeps up the surrounding material like a snowplow," Dwarkadas said. That swept-up material forms a shell around the star that can be 100 light-years across. The shell, which creates a bubble around the Wolf-Rayet star, is tens of thousands of times denser than the surrounding interstellar medium.

The shell contains enough material to build a solar system. It should contain a lot of aluminum-26, and — crucially — it should contain very little iron-60. "I'm looking for a star that produces only aluminum-26," Lugaro said. "The place where we can make only aluminum-26 is in the winds of these very massive stars."

Astronomers have observed suns forming within the shells of Wolf-Rayet stars, Dwarkadas said. By his estimate, as much as 16% of all sun-size stars in our galaxy could have formed this way. "If it's true, there's no reason it should be true only for our solar system," he said. "Ours will not be unique."

Dwarkadas and his colleagues have laid out perhaps the most complete model for how the solar winds of a Wolf-Rayet star could have blasted aluminum-26 into our solar system as it formed. Afterward, the Wolf-Rayet star, with a lifetime of only a few million years, would most likely have collapsed into a black hole, although evidence for this would be long gone, Dwarkadas said.

There are problems with the Wolf-Rayet idea, Lugaro said. For instance, a Wolf-Rayet star creates such an energetic environment that it should have torn our newly formed solar system apart.

Boss still favors the theory that our cloud of dust was ignited by a supernova. Lugaro does not. "At the moment, from the nuclear-physics point of view," she said, "I favor the winds of the Wolf-Rayet stars." However, she said, new information could change her mind next week. "This is a problem that needs to be looked at from different angles. We are still fighting a bit about this."


Original Submission

posted by hubie on Saturday March 07, @05:40AM   Printer-friendly

You've heard of C++ and Windows, C and Linux. How about HolyC and TempleOS?

Tech loves a clean narrative; Genius builds the thing, the thing changes the world, everyone claps, and roll credits. The story of Terry A. Davis refuses to behave that way. Because yes, he built an entire operating system largely by himself. Yes, he wrote his own programming language to go with it. Yes, the technical achievement still makes seasoned developers raise an eyebrow and quietly mutter, "okay, that's... a lot."

But this is not a triumphant startup story. It's messier than that. More human. And, at points, genuinely uncomfortable to sit with. TempleOS didn't come out of a polished lab with venture funding and a product roadmap. It came out of one man's apartment, one man's conviction, and one man's increasingly fragile grip on reality.

See also: TempleOS Creator Passes


Original Submission

posted by hubie on Saturday March 07, @12:53AM   Printer-friendly

Jon Retting has released vscreen, a Rust service that gives AI agents a full Chromium browser with live WebRTC streaming — you see exactly what the AI sees in real-time and can take over mouse and keyboard at any point. The project provides 63 MCP (Model Context Protocol) tools for browser automation: navigation, screenshots, element discovery, cookie/CAPTCHA handling, and multi-agent coordination via lease-based locking.

Built from scratch in Rust — not a Puppeteer wrapper — the codebase is ~31,000 lines across 8 crates with unsafe forbidden, 510+ tests, 3 fuzz targets, and supply chain auditing via cargo-deny. Available as pre-built Linux binaries and Docker images. Source-available, non-commercial license.

https://github.com/jameswebb68/vscreen
https://dev.to/lowjax/vscreen-deep-dive-how-63-mcp-tools-let-ai-agents-actually-use-the-internet-4gij
https://dev.to/lowjax/i-built-a-tool-that-lets-ai-agents-browse-the-real-internet-and-you-can-watch-them-do-it-2fff


Original Submission

posted by hubie on Friday March 06, @08:11PM   Printer-friendly
from the was-it-ever-a-secret? dept.

Total anonymity online is impossible, and it's dangerous to claim otherwise:

To be fair, not all VPN companies are pushing this false narrative -- CNET’s picks for the best VPNs are all very clear about what their services can and can’t do. But too many companies, including a few high-profile VPN providers, continue to keep the myth alive.

Even a VPN provider as established and well-known as CyberGhost continues to promote this dangerous falsehood. The company boldly states on its website that its service can help users “go completely anonymous and surf the internet without privacy worries,” and that they can “enjoy complete anonymity & protection online” with CyberGhost.

To be fair, CyberGhost does mention in an FAQ section tucked away at the bottom of its home page that “no VPN service can make you 100% anonymous online,” but the messaging from the company is nonetheless confusing and avoidable.

This isn't just a case of harmless exaggerated marketing -- it's reckless. Using a VPN while under the impression that it's a silver bullet for online anonymity can put you in a bad spot, even if you have nothing to hide. If you use a social media platform to share sensitive information online with someone, or if you're an investigative journalist in a region whose government practices oppressive digital surveillance, you'll still be at risk, even with a VPN.

You can't simply throw good judgment and all other basic privacy principles out the window just because you think your VPN gives you an all-encompassing invisibility cloak on the internet whenever you switch it on. It's time to dial back the hyperbole and be clear about how a VPN can and can't protect you online, starting with why all this talk about data matters.

[...] Whenever you’re logged in to a service like Google, Facebook, TikTok, Instagram, X, Amazon or Netflix, all of your activity on those platforms can be tracked by the companies and linked directly back to you. Data related to the search terms you enter, links you click on, videos you watch, items you purchase, ads you interact with and content you share are all collected and used to create a detailed profile on your interests and online habits.

Additionally, personal information such as your name, username, address, payment data and email address, along with unique identifiers like your IP address, browser type, device type and operating system can all be tracked.

[...] Yet none of this stops some VPN providers from saying that VPNs can make you totally anonymous online.

In reality, VPNs are just a small piece of the much greater online privacy and security puzzle. VPNs like Mullvad and Windscribe let you sign up and use their services without supplying any personal information whatsoever -- which is about as close as you can get to anonymity with a VPN. Other providers like Proton, NordVPN, ExpressVPN and Surfshark offer additional privacy and security services on top of a VPN that you can bundle under a single subscription, which can help you better round out your cybersecurity toolkit.

Everyday citizens simply looking to boost their online protections should be fine with a VPN, password manager and antivirus. But if you're an activist, lawyer, whistleblower, investigative journalist or anyone else with critical privacy needs, there's a lot more you should do to protect yourself and become as anonymous as possible online.

[...] While neither a VPN nor any single privacy or security tool can guarantee you anonymity, a well-rounded cybersecurity toolkit, some strategic actions and a little bit of common sense can go a long way toward protecting your privacy.


Original Submission

posted by hubie on Friday March 06, @03:30PM   Printer-friendly

By testing agent-to-agent interactions, researchers observed catastrophic system failures. Here's why that's bad news for everyone:

An increasing body of work points to the risks of agentic AI, such as last week's report by MIT and collaborators that documented a lack of oversight, measurement, and control for agents.

However, what happens when one AI agent meets another? Evidence suggests things can turn even worse, according to a report published this week by scholars at Stanford University, Northwestern, Harvard, Carnegie Mellon, and several other institutions.

The result of agent-to-agent interaction was the destruction of server computers, denial-of-service attacks, vast over-consumption of computing resources, and the "systematic escalation of minor errors into catastrophic system failures."

"When agents interact with each other, individual failures compound and qualitatively new failure modes emerge," wrote lead author Natalie Shapira of Northeastern University and collaborators in the report, 'Agents of Chaos.'

"This is a critical dimension of our findings," Shapira and team wrote, "because multi-agent deployment is increasingly common and most existing safety evaluations focus on single-agent settings."

The findings are especially timely given that multi-agent interactions have burst into the mainstream of AI with the recent fervor over the bot social platform Moltbook. That kind of multi-agent hub makes it possible for agentic AI systems to exchange data and carry out instructions on one another that weren't previously possible, largely without any humans in the loop.

The report, which can be downloaded from the arXiv pre-print server, describes a 'red team' test of interacting agents over two weeks, with attempts to find weaknesses in a system by simulating hostile behavior.

What emerged in the research is a system in which humans are mostly absent. Bots send information back and forth, and instruct each other to carry out commands.

Among the many disturbing findings are agents that spread potentially destructive instructions to other agents, agents that mutually reinforce bad security practices via an echo chamber, and agents that engage in potentially endless interactions, consuming vast system resources with no clear purpose.

[...] The premise of the researchers' work is that agentic AI can carry out actions without a person typing in a prompt, as you do with ChatGPT. Agentic AI can be given access to various resources through which to carry out actions. Those resources include email accounts and other communication channels, such as Discord, Signal, Telegram, and more. As they use email and these channels, bots can not only carry out actions but also communicate with and act on other bots.

[...] Among fundamental issues, the underlying LLMs treated both data and commands at the prompt as the same thing, leading to prompt injection.

In the interactions, the authors identified a boundary problem. Agents disclosed "artifacts," such as information obtained from email servers or Discord, without an apparent sense of who should see the information. At the heart of that approach was a lack of a "reliable private deliberation surface in deployed agent stacks." In short, an individual LLM may or may not disclose "reasoning" steps at the prompt. But agents seem to lack well-crafted guardrails and will disclose information in many ways.

The agents also had "no self-model," by which they mean, "agents in our study take irreversible, user-affecting actions without recognizing they are exceeding their own competence boundaries." An example of this issue is when two agents agree to engage in a back-and-forth dialogue without a human, pursuing that approach indefinitely, exhausting system resources.

In an infinite-loop scenario, agents may interact indefinitely, leading to an "infinite loop" and consequent exhaustion of system resources.

"The agents exchanged ongoing messages over the course of at least nine days," the researchers wrote, "consuming approximately 60,000 tokens at the time of writing." Tokens are how OpenAI and others price access to their cloud APIs. Consuming more tokens inflates AI costs, which is already a big issue in an era of rising prices.

The bottom line is that someone has to take responsibility for what is contingent and what is fundamental, and find solutions for both.

Right now, there is no responsibility for an agent per se, noted the researchers: "These behaviors expose a fundamental blind spot in current alignment paradigms: while agents and surrounding humans often implicitly treat the owner as the responsible party, the agents do not reliably behave as if they are accountable to that owner."

That concern means everyone building these systems must deal with the lack of responsibility: "We argue that clarifying and operationalizing responsibility may be a central unresolved challenge for the safe deployment of autonomous, socially embedded AI systems."

arXiv link: https://arxiv.org/abs/2602.20021


Original Submission

posted by janrinok on Friday March 06, @10:43AM   Printer-friendly

"Ultimately, we want to build a fleet of electric harvesters"

The Moon has received a lot of attention in recent months, particularly the surface of Earth's cold and dusty companion.

This has largely been driven by a decision from SpaceX founder Elon Musk to pivot, at least in the near term, from Mars to lunar surface activities and the potential for using material there to build large satellites. But there has been a notable shift from NASA, too, which has started talking a lot more about building up elements of a base on the surface rather than an orbiting space station known as the Gateway.

In short, the world's most successful space company and the largest space agency have both increased their lunar ambitions, suggesting a greater frequency of missions to the Moon in the coming years.

For companies that have long-term business plans focused around the surface of the Moon, these are very positive developments. And two of these lunar startups, Astrolab and Interlune, announced Tuesday morning they are forming a partnership amid this favorable environment.

Astrolab is one of three firms vying to build rovers for NASA's scientific activities on the surface of the Moon, as well as to provide transportation for its astronauts. But the company has been working with commercial customers as well, and one of the most important long-term ones could be a Helium-3 mining company called Interlune.

"Ultimately, we want to build a fleet of electric harvesters that will go to the Moon and excavate, extract and separate Helium-3 from the lunar regolith," said Interlune chief executive Rob Meyerson. "The FLEX Rover is a great platform to go do that."

This is not the first time the two companies have worked together. Last August, Interlune announced that it would fly a multispectral camera on a smaller prototype rover being built by Astrolab. This camera will be used to estimate helium-3 quantities and concentration in Moon dirt, or regolith.

This FLIP rover, about the size of a go-kart, is due to launch later this year on a lunar lander built by Astrobotic. It will fly atop the Griffin lander, taking the place of NASA's VIPER rover, which has been moved to another spacecraft.

The mission will therefore be a learning exercise for both Astrolab, in testing out its software and other features of a small lunar rover, as well as Interlune, which will seek to ground truth data about the concentration of Helium-3 that has previously been estimated from samples returned to Earth during the Apollo program.

In addition to FLIP, Astrolab is developing a larger rover, FLEX, that is about the size of a minivan. This vehicle has a horseshoe-shaped chassis that can accommodate about 3 cubic meters of payload. This allows for a broad array of activities, from carrying multiple scientific instruments across the Moon and providing a long-distance rover for two astronauts, to moving large equipment or, in the case of Interlune, serving as a mobile harvester.

"Our thesis is to make the most versatile platform possible so we can serve a wide array of customers and achieve NASA's goal of being one customer among many," said Jaret Matthews, Astrolab founder and chief executive, in an interview. "So we have essentially a modular approach that allows us to either pick up cargo or implements or payloads. And so in this case, the excavating equipment that Interlune is developing would basically go under the belly of the rover."

The companies did not say when they are scheduled to deploy an initial harvester, but both are working toward that goal. It is likely that a FLEX rover will be one of the payloads on the first SpaceX Starship mission to the lunar surface—probably, but not certainly, the lunar demo mission without crew—planned to fly to the Moon in 2027 or 2028. And Interlune has been working with an industrial equipment manufacturer, Vermeer, to build a harvester to excavate and separate Helium-3 from the lunar surface.

Helium-3 does not occur naturally on Earth, and it exists in only very limited quantities from nuclear weapons tests, nuclear reactors, and radioactive decay. It has several applications, but the most near-term use is in cryogenics, Meyerson believes. The company has already announced contracts for the sale of thousands of liters for very low-temperature refrigeration. But first it must demonstrate the ability to mine and refine the material, which exists in small quantities in lunar soil, and get it back to Earth. This is a difficult challenge, of course, but having partners to move across the Moon and get to and from there helps a lot.

Astrolab and Interlune plan to undertake prototype testing of a mobile harvester in Houston, where there is a new commercial facility known as the Texas A&M University Space Institute. This institute is currently under construction at NASA's Johnson Space Center as the space agency seeks to broaden support for commercial space activities.


Original Submission